This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[numpy] operator around #16126

Merged: 1 commit merged into apache:master on Sep 23, 2019

Conversation

tingying2020 (Contributor) commented:

Create a new branch and move `around` to np_elemwise_unary_op_basic.

@haojin2

```cpp
};

template<int req>
struct around_forwardint {
```
haojin2 (Contributor) commented on Sep 12, 2019:

Get rid of this kernel after you switch to identity below.

tingying2020 (Contributor, Author) replied:

Done.
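The suggestion is sound because rounding an integer array to a non-negative number of decimals is a no-op. A minimal NumPy sketch (illustrating the semantics the MXNet operator follows, not the kernel code itself) of why an identity kernel suffices for that case:

```python
import numpy as np

# Rounding integers to decimals >= 0 leaves them unchanged,
# which is why a plain identity/copy kernel is enough here.
a = np.array([1, 5, 12], dtype=np.int64)
assert (np.around(a, decimals=2) == a).all()

# Negative decimals DO change integer inputs (note round-half-to-even:
# 5 rounds to 0, not 10), so that path still needs the real rounding kernel.
assert (np.around(a, decimals=-1) == np.array([0, 0, 10])).all()
```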

```cpp
    && param.decimals > 0) {
  MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
      Kernel<around_forwardint<req_type>, xpu>::Launch(
```
haojin2 (Contributor):

Simply use the identity kernel instead of your new kernel.

tingying2020 (Contributor, Author) replied:

Done

```python
for hybridize in [True, False]:
    for oneType in types:
        rtol=1e-3
        atol=1e-5
```
haojin2 (Contributor):

```python
rtol, atol = 1e-3, 1e-5
```

tingying2020 (Contributor, Author) replied:

Done.
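The tuple form sets both tolerances in one statement. A minimal sketch (with hypothetical values, not the actual test body) of how those tolerances would feed the comparison step:

```python
import numpy as np

# Tuple assignment writes both tolerances at once, as the review suggests.
rtol, atol = 1e-3, 1e-5

# Hypothetical comparison: check hand-rounded values against np.around
# within the configured tolerances.
expected = np.around(np.array([1.2345, 6.789]), 2)
actual = np.array([1.23, 6.79])
np.testing.assert_allclose(actual, expected, rtol=rtol, atol=atol)
```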

```python
        return F.np.around(x, self.decimals)

shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test shapes; remember to include zero-dim and zero-size shapes
types = ['int32', 'int64', 'float32', 'double']
```
haojin2 (Contributor):

```python
types = ['int32', 'int64', 'float32', 'float64']
```

tingying2020 (Contributor, Author) replied:

Done.
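The rename is about naming consistency rather than numerics: in NumPy, 'double' is simply an alias for 'float64', but the 'float64' spelling matches the dtype names used elsewhere in the test suite. A quick check of the NumPy aliasing:

```python
import numpy as np

# 'double' and 'float64' name the same dtype in NumPy; the review asks
# for the 'float64' spelling for consistency with the other dtype strings.
assert np.dtype('double') == np.dtype('float64')
assert np.dtype('float64').itemsize == 8  # 64-bit IEEE double
```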

```python
    def hybrid_forward(self, F, x):
        return F.np.around(x, self.decimals)

shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test shapes; remember to include zero-dim and zero-size shapes
```
haojin2 (Contributor):

```python
shapes = [(), (1, 2, 3), (1, 0)]
```

tingying2020 (Contributor, Author) replied:

Done.
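The reduced list still keeps the important edge cases: `()` exercises zero-dim arrays and `(1, 0)` exercises zero-size arrays. A small NumPy sketch of what those cases look like:

```python
import numpy as np

# Zero-dim input: around returns a zero-dim result.
x0 = np.array(3.456)
assert np.around(x0, 2).shape == ()
assert np.isclose(np.around(x0, 2), 3.46)

# Zero-size input: the shape is preserved and no elements are computed.
xe = np.zeros((1, 0))
assert np.around(xe, 2).shape == (1, 0)
```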

```python
rtol=1e-3
atol=1e-5
for shape in shapes:
    for d in range(-10, 11):
```
haojin2 (Contributor):

Too many cases for d; simply reduce to something like range(-5, 6).

tingying2020 (Contributor, Author) replied:

Done.
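Shrinking the decimals range loses little coverage, since each unit of `d` only shifts the rounding position by one digit. A NumPy illustration over a few values of `d` (the sample value is chosen for illustration):

```python
import numpy as np

x = np.array([123.456])

# Each decrement of d rounds one digit further to the left of the
# decimal point, so a small range already exercises the mechanism.
assert np.isclose(np.around(x, -2), 100.0).all()
assert np.isclose(np.around(x, -1), 120.0).all()
assert np.isclose(np.around(x, 0), 123.0).all()
assert np.isclose(np.around(x, 2), 123.46).all()
```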

@tingying2020 tingying2020 force-pushed the ms_around_new branch 4 times, most recently from 848ce30 to 81649d1 Compare September 13, 2019 02:59
haojin2 (Contributor) left a comment:

LGTM


@tingying2020 tingying2020 force-pushed the ms_around_new branch 2 times, most recently from 006ec32 to 1c2d15c Compare September 19, 2019 14:23
* change the name of argument

* add doc in three files and fix some bug

* change the data type in .h and add test function

    cancel optimization when abs(temp) < 0.5
    modify test on cpu and add test on gpu
    do not support float16
    edit testcase on gpu and add 'Do not support float16 on doc'

* edit doc: support scalar

* adjust the format

* add license

* fix format error

* delete gpu test

* move around to np_elemwise_unary_op_basic

* edit AroundOpType

* replace int kernel with identity_with_cast and fix format error

* delete unused req_type
@reminisce reminisce merged commit 7344c91 into apache:master Sep 23, 2019
drivanov pushed a commit to drivanov/incubator-mxnet that referenced this pull request Sep 26, 2019
larroy pushed a commit to larroy/mxnet that referenced this pull request Sep 28, 2019
3 participants